Marcelo Magnasco: Sparse time-frequency representations and the neural coding of sound
Publication date: 2008
Topics: theoretical neuroscience
Talk given for the Redwood Center for Theoretical Neuroscience on February 13, 2008. Speaker is Marcelo Magnasco of Rockefeller University.
Abstract.
Auditory neurons preserve exquisite temporal information about sound features, but we do not know how the brain uses this information to parse the rapidly changing sounds of the natural world. A simple argument for making effective use of temporal information in the auditory nerve leads us to consider the reassignment class of time-frequency representations as a potential model of auditory processing. We show that these representations are sparse even for spectrally dense signals. Many details of complex sounds that are virtually undetectable in standard sonograms are readily perceptible and visible in reassignment; as the only known class of time-frequency representations that is always "in focus", this methodology may help explain the remarkable acuity of auditory perception. We also consider how to determine, experimentally, when a neural code embeds information in the detailed timing of spikes. We show that standard "spike-triggered" receptive field constructions are inadequate to extract this level of information, and we present a new method, "differential reverse correlation", based on correlating small changes in spike timing with small changes in the stimulus.
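As background on the reassignment idea mentioned in the abstract: a reassigned spectrogram moves each bin's energy from the center of its analysis window to the estimated instantaneous frequency and group delay of the signal inside that window, which is what keeps the representation "in focus". Below is a minimal sketch, not the speaker's code, using librosa's reassigned_spectrogram on a test chirp; the sample rate, chirp parameters, and window settings are illustrative assumptions only.

```python
# Minimal sketch: standard vs. reassigned spectrogram of a test chirp.
# All signal and analysis parameters below are illustrative assumptions.
import numpy as np
import librosa

sr = 16000                                        # assumed sample rate (Hz)
t = np.linspace(0, 1.0, sr, endpoint=False)
y = np.sin(2 * np.pi * (500 * t + 1500 * t**2))   # linear chirp, 500 -> 3500 Hz

n_fft, hop = 1024, 256

# Standard magnitude spectrogram: each component's energy is smeared over
# the time-frequency tile of the analysis window.
S = np.abs(librosa.stft(y, n_fft=n_fft, hop_length=hop))

# Reassigned spectrogram: librosa returns, for every bin, the reassigned
# frequency and time coordinates plus the magnitude, so energy can be
# replotted at the signal's instantaneous frequency / group delay.
freqs, times, mags = librosa.reassigned_spectrogram(
    y, sr=sr, n_fft=n_fft, hop_length=hop)

# A scatter plot of (times, freqs) weighted by mags concentrates the chirp
# onto a sharp line, illustrating the sparse, "in focus" property described
# in the abstract.
```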
Color: color
Identifier: Redwood_Center_2008_02_13_Marcelo_Magnasco
Run time: 1.5 hours
Sound: sound
Year: 2008
SIMILAR ITEMS (based on metadata)
Community Video by Redwood Center for Theoretical Neuroscience (movies, 217 views)
Community Video by Redwood Center for Theoretical Neuroscience (movies, 174 views)
Community Video by Redwood Center for Theoretical Neuroscience (movies, 373 views)
Community Video by Redwood Center for Theoretical Neuroscience (movies, 330 views)
Community Video by Redwood Center for Theoretical Neuroscience (movies, 107 views)
Community Video by Redwood Center for Theoretical Neuroscience (movies, 121 views)
Community Video by Redwood Center for Theoretical Neuroscience (movies, 266 views)
Community Video by Redwood Center for Theoretical Neuroscience (movies, 277 views)
Community Video by Redwood Center for Theoretical Neuroscience (movies, 610 views)
Community Video by Redwood Center for Theoretical Neuroscience (movies)